# Masked Language Model
RuModernBERT Small
Apache-2.0
A Russian version of the modern bidirectional encoder-only Transformer model ModernBERT, pre-trained on approximately 2 trillion tokens of Russian, English, and code data, with a context length of up to 8,192 tokens (a usage sketch follows this entry).
Large Language Model
Transformers Supports Multiple Languages

deepvk
619
14
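Since RuModernBERT Small is a standard masked language model, the simplest way to try it is the `fill-mask` pipeline from `transformers`. A minimal sketch, assuming the checkpoint is published as `deepvk/RuModernBERT-small` and that the installed `transformers` release includes ModernBERT support:

```python
# Minimal fill-mask sketch; the repo id below is assumed from the listing.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="deepvk/RuModernBERT-small")

# Read the mask token from the tokenizer instead of hard-coding it.
mask = fill_mask.tokenizer.mask_token
for prediction in fill_mask(f"Столица России — {mask}."):
    print(prediction["token_str"], round(prediction["score"], 3))
```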
HPLT BERT Base SK
Apache-2.0
A monolingual Slovak BERT model released by the HPLT project, using the LTG-BERT architecture and suitable for masked language modeling tasks (a loading sketch follows this entry).
Large Language Model
Transformers Other

HPLT
23
2
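LTG-BERT is not one of the stock `transformers` architectures, so the HPLT checkpoints are typically loaded with `trust_remote_code=True`. A minimal loading sketch, assuming the repo id is `HPLT/hplt_bert_base_sk`; verify the exact id and loading flags against the model card:

```python
# Loading sketch for an LTG-BERT checkpoint that ships custom modeling code.
from transformers import AutoTokenizer, AutoModelForMaskedLM

repo_id = "HPLT/hplt_bert_base_sk"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMaskedLM.from_pretrained(repo_id, trust_remote_code=True)

# The mask token to insert wherever a prediction is wanted.
print(tokenizer.mask_token)
```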
MizBERT
Apache-2.0
MizBERT is a masked language model (MLM) pre-trained on Mizo text corpora, utilizing the BERT architecture to effectively learn contextual representations of Mizo vocabulary.
Large Language Model
Transformers

robzchhangte
36
3
ESM2 T33 650M UR50D
MIT
ESM-2 is a state-of-the-art protein language model trained with a masked language modeling objective, suitable for protein sequence analysis and prediction tasks (a usage sketch follows this entry).
Protein Model
Transformers

facebook
640.23k
41
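ESM-2 exposes the usual masked-LM interface, only over amino-acid tokens instead of words. A minimal sketch of predicting a masked residue, using the repo id `facebook/esm2_t33_650M_UR50D` from the entry above; the toy sequence is illustrative only:

```python
# Predict the most likely amino acids at a masked position with ESM-2.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "facebook/esm2_t33_650M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Toy protein sequence with one residue replaced by the mask token.
sequence = "MKTAYIAKQR" + tokenizer.mask_token + "ISFVKSHFSRQLEERLGLIEVQ"
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
top_ids = logits[0, mask_pos].topk(3).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))  # top-3 candidate residues
```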
MacBERT4CSC ScalarMix Base Chinese
Apache-2.0
A masked language model fine-tuned from MacBERT for Chinese spelling (typo) correction (an inference sketch follows this entry).
Large Language Model
Transformers

x180
15
1
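MacBERT-style Chinese spelling correction is usually run by feeding the raw, possibly misspelled sentence through the masked-LM head without explicit mask tokens and reading the argmax character at every position. A sketch of that pattern, assuming the checkpoint loads as a stock `AutoModelForMaskedLM` under an id like `x180/macbert4csc-scalarmix-base-chinese`; both the repo id and the plain-head assumption should be checked against the model card, since the ScalarMix variant may ship custom code:

```python
# Sketch of masked-LM-based Chinese spelling correction (argmax per position).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "x180/macbert4csc-scalarmix-base-chinese"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "今天新情很好"  # "心情" mistyped as "新情"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

corrected_ids = logits.argmax(dim=-1)[0, 1:-1]  # drop [CLS] and [SEP]
print(tokenizer.decode(corrected_ids).replace(" ", ""))
```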
BERT Base Buddhist Sanskrit
A BERT-based masked language model specifically designed for processing Buddhist Sanskrit texts.
Large Language Model
Transformers

Matej
31
3
TavBERT TR
A BERT-like masked language model for Turkish that operates at the character level and is pre-trained with SpanBERT-style character span masking (a usage sketch follows this entry).
Large Language Model
Transformers Other

tau
15
1
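Because TavBERT tokenizes at the character level, a single mask token stands in for one character rather than a whole word. A sketch only, assuming the checkpoint is available as `tau/tavbert-tr` and works with the standard `fill-mask` pipeline:

```python
# Character-level fill-mask sketch; repo id assumed from the entry above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="tau/tavbert-tr")
mask = fill_mask.tokenizer.mask_token

# One character inside "İstanbul" is masked out.
results = fill_mask(f"İstan{mask}ul Türkiye'nin en büyük şehridir.")
print(results[0]["token_str"])
```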
Rust CL-Tohoku BERT Large Japanese
A version of Tohoku University's BERT Large Japanese model converted for use in Rust.
Large Language Model Japanese
Yokohide031
15
1
TAPAS Small MaskLM
TAPAS (Table Parser) is a table-based pre-trained language model developed by Google Research, specifically designed for processing tabular data and natural language queries (a usage sketch follows this entry).
Large Language Model
Transformers

google
14
1
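The TAPAS MaskLM checkpoints pair a flattened table with a natural-language query, so the tokenizer takes a pandas DataFrame of strings alongside the text. A minimal sketch, assuming the repo id `google/tapas-small-masklm` from the entry above:

```python
# Masked-token prediction over a toy table with a TAPAS MLM checkpoint.
import pandas as pd
import torch
from transformers import TapasTokenizer, TapasForMaskedLM

model_id = "google/tapas-small-masklm"  # assumed repo id
tokenizer = TapasTokenizer.from_pretrained(model_id)
model = TapasForMaskedLM.from_pretrained(model_id)

table = pd.DataFrame(
    {"City": ["Paris", "Berlin"], "Country": ["France", "Germany"]}
)
query = f"Paris is located in {tokenizer.mask_token}."
inputs = tokenizer(table=table, queries=query, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode(logits[0, mask_pos].argmax()))
```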
TAPAS Large MaskLM
TAPAS is a language model pre-trained on tabular data, specifically designed for natural language tasks over tables.
Large Language Model
Transformers

google
15
2
KoBERT LM
Apache-2.0
KoBERT-LM is a BERT-based language model further pre-trained specifically on Korean text.
Large Language Model Korean
monologg
49
1
BERT L12 H240 A12
A BERT variant pre-trained via knowledge distillation, with 12 layers, a hidden size of 240, and 12 attention heads, suitable for masked language modeling tasks.
Large Language Model
Transformers

eli4s
7
2
Bangla BERT Base
MIT
Bangla BERT Base is a pre-trained Bengali language model based on the BERT architecture, supporting various downstream NLP tasks.
Large Language Model Other
sagorsarker
7,282
21
TAPAS Medium MaskLM
TAPAS is a table-based pre-trained language model specifically designed for processing tabular data and related queries.
Large Language Model
Transformers

google
14
1
Indonesian RoBERTa Base
MIT
An Indonesian masked language model based on the RoBERTa architecture, trained on the OSCAR corpus with a validation accuracy of 62.45% (a usage sketch follows this entry).
Large Language Model Other
flax-community
1,013
11
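RoBERTa-derived checkpoints use `<mask>` rather than BERT's `[MASK]`, so it is safest to read the mask token from the tokenizer. A minimal sketch, assuming the repo id `flax-community/indonesian-roberta-base` from the entry above:

```python
# Fill-mask sketch for a RoBERTa-style Indonesian masked language model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="flax-community/indonesian-roberta-base")
mask = fill_mask.tokenizer.mask_token
print(fill_mask(f"Ibu kota Indonesia adalah {mask}.")[0]["token_str"])
```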